    Visual Prediction of Rover Slip: Learning Algorithms and Field Experiments

    Perception of the surrounding environment is an essential capability for intelligent navigation in any autonomous vehicle. In the context of Mars exploration, there is a strong motivation to enhance the perception of the rovers beyond geometry-based obstacle avoidance, so as to predict potential interactions with the terrain. In this thesis we propose to remotely predict the amount of slip, which reflects the mobility of the vehicle on future terrain. The method is based on learning from experience and uses visual information from stereo imagery as input. We test the algorithm on several robot platforms and in different terrains, and demonstrate its usefulness in an integrated system onboard a Mars prototype rover in the JPL Mars Yard. Another desirable capability for an autonomous robot is to learn about its interactions with the environment in a fully automatic fashion. We propose an algorithm which uses the robot's sensors as supervision for vision-based learning of different terrain types; it can work with noisy and ambiguous signals provided by the onboard sensors. To cope with rich, high-dimensional visual representations, we propose a novel nonlinear dimensionality reduction technique which exploits automatic supervision. The method is the first to consider supervised nonlinear dimensionality reduction in a probabilistic framework using supervision which can be noisy or ambiguous. Finally, we consider the problem of learning to recognize different terrains under the time constraints of an onboard autonomous system. We propose a method which automatically learns a variable-length feature representation depending on the complexity of the classification task, achieving a good trade-off between reduced computational time and recognition performance.
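    The slip-prediction idea lends itself to a compact illustration as supervised regression: terrain features computed from stereo imagery ahead of the rover are mapped to the slip later measured when the rover actually traverses that terrain. The sketch below is a minimal stand-in, not the thesis's actual pipeline; the synthetic features, the assumed slip definition, and the random-forest regressor are all illustrative assumptions.

```python
# Illustrative sketch: learning to predict slip from visual terrain features.
# The features, regressor, and data here are hypothetical stand-ins for the
# general learning-from-experience idea, not the thesis's method.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Hypothetical training data gathered while driving:
#   X: per-cell features from stereo imagery (e.g., texture stats + slope)
#   y: slip measured afterwards, e.g. (commanded - actual) / commanded motion
n_cells, n_features = 500, 8
X = rng.normal(size=(n_cells, n_features))
slope = X[:, 0]                      # pretend feature 0 is terrain slope
y = np.clip(0.3 * slope + 0.1 * rng.normal(size=n_cells), 0.0, 1.0)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# At planning time, predict slip for terrain cells ahead of the rover
# before committing to a path over them.
X_ahead = rng.normal(size=(10, n_features))
predicted_slip = model.predict(X_ahead)
print(predicted_slip.round(2))
```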

    Data Pruning

    Could a training example be detrimental to learning? Contrary to the common belief that more training data leads to better generalization, we show that a learning algorithm may be better off when some training examples are discarded; in other words, the quality of the examples matters. We explore a general approach to identify examples that are troublesome for learning with a given model and exclude them from the training set in order to achieve better generalization. We term this process 'data pruning'. The method is intended as a pre-learning step to obtain better data to train on. The approach consists of creating multiple semi-independent learners from the dataset, each of which is influenced differently by individual examples. The learners' opinions about which examples are difficult are arbitrated by an inference mechanism. Although it comes without guarantees of optimality, data pruning is shown to decrease the generalization error in experiments on real-life data. It is not assumed that the data or the noise can be modeled, or that additional training examples are available. Data pruning is applied to obtaining visual category data with little supervision, a setting in which the object data is contaminated with non-object examples. We show that pruning noisy datasets prior to learning can be very successful, especially in the presence of a large amount of contamination or when the algorithm is sensitive to noise. Our experiments demonstrate that data pruning can be worthwhile even if the algorithm has regularization capabilities or mechanisms to cope with noise, and has the potential to be a more refined method for regularization or model selection.
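    A minimal sketch of the pruning loop described above: several semi-independent learners are trained on random halves of the data, each example is scored by how often the learners that did not see it misclassify it, and consistently misclassified examples are dropped before the final model is trained. The majority-style vote is only a hypothetical stand-in for the paper's inference mechanism, and the logistic-regression learners and synthetic noisy dataset are illustrative assumptions.

```python
# Illustrative sketch of data pruning: semi-independent learners vote on
# which examples are troublesome; those examples are dropped before the
# final model is trained. The vote stands in for the paper's actual
# inference mechanism.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic binary dataset with ~15% flipped labels, standing in for
# visual category data contaminated with non-object examples.
X, y = make_classification(n_samples=400, n_features=10, flip_y=0.15,
                           random_state=0)

n_learners = 7
errors = np.zeros(len(y))           # misclassifications while held out
held_out_counts = np.zeros(len(y))  # times each example was held out

for _ in range(n_learners):
    # Each learner sees a random half of the data, so learners are only
    # semi-independent and are influenced differently by individual examples.
    train_idx = rng.choice(len(y), size=len(y) // 2, replace=False)
    held_idx = np.setdiff1d(np.arange(len(y)), train_idx)
    clf = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
    errors[held_idx] += clf.predict(X[held_idx]) != y[held_idx]
    held_out_counts[held_idx] += 1

# Prune examples misclassified in at least half of their held-out evaluations,
# then retrain the final model on the cleaned set.
keep = errors < 0.5 * np.maximum(held_out_counts, 1)
final_model = LogisticRegression(max_iter=1000).fit(X[keep], y[keep])
print(f"kept {keep.sum()} of {len(y)} training examples")
```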